Received: from apakabar.cc.columbia.edu by watsun.cc.columbia.edu with SMTP id AA06562
(5.65c+CU/IDA-1.4.4/HLK for <kermit.misc@watsun.cc.columbia.edu>); Tue, 25 Jul 1995 20:07:32 -0400
Received: by apakabar.cc.columbia.edu id AA22300
(5.65c+CU/IDA-1.4.4/HLK for kermit.misc@watsun); Tue, 25 Jul 1995 20:07:31 -0400
Path: news.columbia.edu!watsun.cc.columbia.edu!fdc
From: fdc@watsun.cc.columbia.edu (Frank da Cruz)
Newsgroups: comp.protocols.kermit.misc
Subject: Re: Kermit download from CompuServe.. best setup??
Date: 26 Jul 1995 00:07:26 GMT
Organization: Columbia University
Lines: 88
Message-Id: <3v40vu$loq@apakabar.cc.columbia.edu>
References: <3uidtu$r5c@hpber004.swiss.hp.com> <MpREww8Z7mxH084yn@netcom.com> <DC7oIH.6IA@omen.com> <kwOFww8Z7GDV084yn@netcom.com>
Nntp-Posting-Host: watsun.cc.columbia.edu
Apparently-To: kermit.misc@watsun.cc.columbia.edu
In article <kwOFww8Z7GDV084yn@netcom.com>,
Jeffrey Hurwit <jhurwit@netcom.com> wrote:
>In article <DC7oIH.6IA@omen.com>, caf@omen.com (Chuck Forsberg WA7KGX) wrote:
>>The complexity of the Kermit protocol with its window management and
>>other features exacts a penalty in CPU resources.
> To be perfectly honest, I'm not familiar with zmodem. However,
> there was quite a bit of discussion in one of our ISP-local news
> groups about disconnects at 10 minutes during transfers using sz.
> It was reasoned that 10 minutes indicated the idle daemon kicking
> in and logging out sessions, and the solution was found to be to
> use an sz option to enable windows. Why does sz offer this
> feature, if it's known to be detrimental in some way?
>
Different philosophies regarding windowing and, for that matter, basic
tuning defaults. Sometimes you need windows (small "w" :-). In the case
you cite, it's because you need *some* reverse-channel activity to tickle
your idle daemon.
I think everybody understands by now, and Chuck will agree it is a fair
statement, that ZMODEM is tuned, by default, for maximum speed, whereas
C-Kermit is tuned, by default, for safety (robustness). Kermit, however,
does not totally give up its robustness features when you tune it for
speed, so it is not exactly accurate to say (not that anyone has, but I
know you're all thinking it :-) that Kermit, thus tuned, is the same
as ZMODEM, any more than it is to say that ZMODEM, forced to use a window
size and escape all control characters, is the same as Kermit.
One of the big differences is in the windowing strategy. ZMODEM's is
"go-back-to-n", Kermit's is "selective repeat". Go-back-to-n means, "if
there is an error, go back to the spot where the error was detected and
start over again from there". Selective repeat means, "if there is an
error, retransmit only the piece that had the error".
Go-back-to-n is more efficient as long as there are no errors, because, as
Chuck implies, there is less bookkeeping involved. However, go-back-to-n
is less efficient if it must recover from errors, especially when there is
a long round-trip delay, because a lot more stuff is already in the pipe
by the time the error is detected, and all of it must be sent again.
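To put rough numbers on that, here is a toy simulation (my own sketch, in
Python; it is not either protocol's actual state machine, and the window
sizes, error rate, and packet count are invented for illustration).  It
counts total packet transmissions when one strategy resends everything in
the pipe after an error and the other resends only the damaged packet:

  import random

  def transmissions(strategy, n_packets=1000, window=16, p_err=0.02, seed=1):
      # Toy model: "go-back-to-n" throws away and resends everything that
      # was already in the pipe behind a damaged packet; "selective" resends
      # only the damaged packet (and we assume the retry gets through).
      rng = random.Random(seed)
      sent = delivered = 0
      while delivered < n_packets:
          burst = min(window, n_packets - delivered)
          damaged = [rng.random() < p_err for _ in range(burst)]
          sent += burst                          # the whole burst goes out
          if strategy == "selective":
              sent += sum(damaged)               # retransmit just the bad ones
              delivered += burst
          elif True in damaged:
              # Packets ahead of the first error count; everything behind it
              # was already in the pipe and must all be sent again.
              delivered += damaged.index(True)
          else:
              delivered += burst
      return sent

  for w in (4, 16, 64):                          # bigger window ~ longer pipe
      print(w, transmissions("go-back-to-n", window=w),
               transmissions("selective", window=w))

Run it and the go-back-to-n column grows with the window while the
selective-repeat column barely moves: with more data in flight (a longer
round-trip delay), each error costs go-back-to-n proportionally more.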
However, both protocols work. I would say it is simply a matter of
preference on the part of a well-informed user. Do you care more about
getting the best speed on good connections, or getting good speed on ALL
connections? (This is not to say that ZMODEM is necessarily faster than
Kermit on good connections, but I think it can be demonstrated that
ZMODEM's speed goes down much faster as the connection deteriorates than
Kermit's does, and more so if the connection has long round-trip delays.)
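A back-of-envelope version of the same point (first-order approximations of
my own, not measurements of either program): selective repeat needs about
1/(1-p) transmissions per delivered packet, where p is the chance a packet
arrives damaged, while go-back-to-n also throws away roughly a pipeful of W
packets every time there is an error, so it needs roughly 1 + p*W:

  def selective(p):         return 1.0 / (1.0 - p)   # retry only the bad packet
  def go_back_to_n(p, w):   return 1.0 + p * w       # plus ~a pipeful per error

  for p in (0.001, 0.01, 0.05):
      print(f"p={p}: selective {selective(p):.2f}, "
            f"go-back-to-n w=16 {go_back_to_n(p, 16):.2f}, "
            f"w=64 {go_back_to_n(p, 64):.2f}")

On a clean line (p near zero) the two are indistinguishable; at p = 0.05 with
a 64-packet pipe, go-back-to-n makes over four transmissions for every packet
delivered, versus about 1.05 for selective repeat.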
>>time kermit -s b17mh.gif
> tells Kermit to do newline and charset translations, which might
> account for some CPU power. At the least, you'd have a corrupted
> gif file on the other end.
>
Actually, I think Chuck is a pretty competent Kermit user :-) I'm sure he
has binary mode and the various other performance options set in his
initialization file.
If you look at the elapsed times, they are not that different; I would
classify the difference as negligible and not worth quibbling over. Kermit's CPU
times are indeed higher, but who knows what Chuck's init file is doing.
Maybe it's calculating Pi to a million digits (you can do that with
Kermit's command language :-) (JUST KIDDING!)
Chuck has made the point that Kermit has all this startup overhead that
Professional-YAM(tm) doesn't have. What can I say, he's right. It's
setting up dialing and services directories, defining all kinds of
macros, and so on. These are convenience features that a lot of people
need and depend on. It's nice to have them there. The "power user",
of course, can bypass all that and go for the speed.
Even then, I don't doubt that Chuck's code eats less CPU, even after we
enter packet mode. That's because it doesn't do as much. Even when
Kermit is not translating character sets, prefixing control characters,
handling 8-bit characters on 7-bit links via single or locking shifts,
etc, it still has to make those tests, and it still manages its window.
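For the record, here is roughly what "those tests" look like per byte.  This
is a simplified sketch of Kermit-style control prefixing with the default '#'
quote character; the real encoder also deals with 8th-bit quoting, repeat
counts, locking shifts, character sets, and so on:

  QUOTE = 0x23                                 # '#', the default control prefix

  def encode_byte(c, prefixing=True):
      # Every byte gets examined, even when the answer is "send it as is" --
      # those per-byte tests are the overhead being talked about above.
      low = c & 0x7F
      if low < 0x20 or low == 0x7F:            # a control character
          if prefixing:
              return bytes([QUOTE, c ^ 0x40])  # prefix it and toggle bit 6
          return bytes([c])
      if c == QUOTE:                           # the prefix itself gets prefixed
          return bytes([QUOTE, c])
      return bytes([c])

  data = b"GIF89a\x01\x00#\x7f"
  print(b"".join(encode_byte(b) for b in data))   # controls come out as #A #@ ## #?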
Yes, I could put in some time on micro-optimizing the code to avoid these
tests (generally at the penalty of increased code size), but the only case
I heard of where the CPU utilization posed a problem was on a 10+-year-old
MicroVAX that had a single-character-interrupt serial port and a very
small memory, and in that case optimizing Kermit's "inner loop" would not
have helped a bit.
Anyway, if we were all so concerned about conserving scarce CPU cycles, we
would not all be rushing to install the latest graphical operating system on
our PCs :-) Hmmm... wait, now I think I'm beginning to see Chuck's subtle
point...
- Frank